Quadratic shrinkage for large covariance matrices

Authors

Abstract

This paper constructs a new estimator for large covariance matrices by drawing a bridge between the classic Stein (1975) estimator in finite samples and recent progress under large-dimensional asymptotics. Our estimator keeps the eigenvectors of the sample covariance matrix and applies shrinkage to the inverse sample eigenvalues. The corresponding formula is quadratic: it has two shrinkage targets weighted by quadratic functions of the concentration (that is, the matrix dimension divided by the sample size). The first target dominates at mid-level concentrations and the second one at higher levels. This extra degree of freedom enables us to outperform linear shrinkage when optimal shrinkage is not linear, which is the general case. Both of our targets are based on what we term the “Stein shrinker”, a local attraction operator that pulls sample eigenvalues towards their nearest neighbors, but whose force diminishes with distance (like gravitation). We prove that no cubic or higher-order nonlinearities beat quadratic shrinkage with respect to the Frobenius loss. Non-normality and the case where the dimension exceeds the sample size are accommodated. Monte Carlo simulations confirm state-of-the-art performance in terms of accuracy, speed, and scalability.
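The rotation-equivariant structure described in the abstract (keep the sample eigenvectors, modify only the eigenvalues) can be illustrated with a toy sketch. The snippet below shrinks the sample eigenvalues linearly toward their grand mean; this is a simplified stand-in, not the paper’s quadratic formula (which shrinks the inverse eigenvalues using two concentration-weighted targets), and the intensity `rho` is a hypothetical tuning parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20                      # sample size and dimension; concentration c = p/n
X = rng.standard_normal((n, p))    # data drawn with identity population covariance
S = X.T @ X / n                    # sample covariance matrix

# Eigendecomposition: keep the sample eigenvectors, touch only the eigenvalues.
lam, U = np.linalg.eigh(S)

# Toy linear shrinkage of the eigenvalues toward their grand mean.
rho = 0.5                          # hypothetical shrinkage intensity
lam_shrunk = rho * lam.mean() + (1 - rho) * lam
Sigma_hat = U @ np.diag(lam_shrunk) @ U.T

# Pulling eigenvalues together reduces their dispersion, so the shrunk
# estimator is better conditioned than the raw sample covariance.
print(np.linalg.cond(Sigma_hat) < np.linalg.cond(S))   # True
```

Any shrinkage estimator of this family differs from the sample covariance only in the eigenvalue map; the paper’s contribution is the specific quadratic map applied to the inverse eigenvalues.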


Similar articles

Nonlinear shrinkage estimation of large-dimensional covariance matrices

Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further kn...

Shrinkage Estimators for High-Dimensional Covariance Matrices

As high-dimensional data becomes ubiquitous, standard estimators of the population covariance matrix become difficult to use. Specifically, in the case where the number of samples is small (large p, small n), the sample covariance matrix is not positive definite. In this paper we explore some recent estimators of sample covariance matrices in the large p, small n setting, namely, shrinkage estimat...
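The failure of positive definiteness in the large p, small n setting is easy to verify numerically: with p > n, the centered data matrix has at most n − 1 linearly independent rows, so the sample covariance is rank-deficient and singular. A minimal demonstration (the sizes n = 10, p = 40 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 10, 40                        # "large p, small n"
X = rng.standard_normal((n, p))
Xc = X - X.mean(axis=0)              # center the data (costs one degree of freedom)
S = Xc.T @ Xc / (n - 1)              # p x p sample covariance matrix

# With p > n the sample covariance has rank at most n - 1, so it is
# singular and cannot be inverted -- the motivation for shrinkage.
r = np.linalg.matrix_rank(S)
print(r)                             # at most n - 1 = 9
print(r < p)                         # True
```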

Direct Nonlinear Shrinkage Estimation of Large-Dimensional Covariance Matrices

This paper introduces a nonlinear shrinkage estimator of the covariance matrix that does not require recovering the population eigenvalues first. We estimate the sample spectral density and its Hilbert transform directly by smoothing the sample eigenvalues with a variable-bandwidth kernel. Relative to numerically inverting the so-called QuEST function, the main advantages of direct kernel estim...
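The idea of smoothing the sample eigenvalues with a variable-bandwidth kernel can be sketched in a few lines. The snippet below is a much-simplified illustration, not the authors’ estimator: it uses a Gaussian kernel with a bandwidth proportional to each eigenvalue, whereas the direct kernel method uses a different kernel and a carefully derived bandwidth; `alpha` is a hypothetical tuning parameter.

```python
import numpy as np

def spectral_density(lam, grid, alpha=0.3):
    """Kernel estimate of the sample spectral density on `grid`.

    Simplified sketch: Gaussian kernels centered at the sample
    eigenvalues `lam`, with a variable bandwidth proportional to
    each eigenvalue (`alpha` is a hypothetical tuning parameter).
    """
    h = alpha * np.maximum(lam, 1e-12)             # per-eigenvalue bandwidth
    z = (grid[:, None] - lam[None, :]) / h[None, :]
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (k / h[None, :]).mean(axis=1)           # average over eigenvalues

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 60))
lam = np.linalg.eigvalsh(X.T @ X / 200)            # sample eigenvalues, c = 0.3
grid = np.linspace(0.0, lam.max() * 1.5, 400)
f = spectral_density(lam, grid)

# A valid density estimate: nonnegative, with total mass close to one.
print(np.all(f >= 0))                              # True
print(f.sum() * (grid[1] - grid[0]))               # approximately 1
```

In the direct kernel approach, this density estimate and its Hilbert transform then feed the asymptotically optimal nonlinear shrinkage formula without recovering the population eigenvalues.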

Non-parametric shrinkage mean estimation for quadratic loss functions with unknown covariance matrices

In this paper, a shrinkage estimator for the population mean is proposed under known quadratic loss functions with unknown covariance matrices. The new estimator is nonparametric in the sense that it does not assume a specific parametric distribution for the data and it does not require the prior information on the population covariance matrix. Analytical results on the improvement of the propos...

Shrinkage estimators for large covariance matrices in multivariate real and complex normal distributions under an invariant quadratic loss

The problem of estimating large covariance matrices of multivariate real normal and complex normal distributions is considered when the dimension of the variables is larger than the sample size. The Stein-Haff identities and calculus on eigenstructures for singular Wishart matrices are developed for real and complex cases, respectively. By using these techniques, the unbiased risk est...


Journal

Journal title: Bernoulli

Year: 2022

ISSN: 1573-9759, 1350-7265

DOI: https://doi.org/10.3150/20-bej1315